Invariant Stochastic Encoders
Abstract
The theory of stochastic vector quantisers (SVQ) has been extended to allow the quantiser to develop invariances, so that only "large" degrees of freedom in the input vector are represented in the code. This has been applied to the problem of encoding data vectors that are a superposition of a "large" jammer and a "small" signal, so that only the jammer is represented in the code. The coded jammer can then be subtracted from the total input vector (i.e. the jammer is nulled), leaving a residual that contains only the underlying signal. The main advantage of this approach to jammer nulling is that little prior knowledge of the jammer is assumed, because the jammer's properties are discovered automatically by the SVQ as it is trained on examples of input vectors.
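The nulling step amounts to encoding the input, decoding the code to obtain the reconstructed jammer, and subtracting that reconstruction from the input. Below is a minimal sketch of this idea in Python, assuming a stand-in projection-based encoder/decoder and a toy data generator; none of these names or models come from the paper's actual SVQ implementation.

```python
# Hypothetical sketch of jammer nulling by encode / decode / subtract.
# The encoder here is a simple projection onto a known jammer subspace,
# standing in for a trained invariant SVQ; all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

dim = 8
jammer_dir = rng.normal(size=dim)
jammer_dir /= np.linalg.norm(jammer_dir)        # 1-D "large" jammer subspace

def make_input():
    """Toy input: a large jammer plus a small underlying signal."""
    jammer = 10.0 * rng.normal() * jammer_dir   # "large" degree of freedom
    signal = 0.1 * rng.normal(size=dim)         # "small" signal
    return jammer + signal, signal

def encode(x):
    # The code represents only the large (jammer) degree of freedom.
    return np.dot(jammer_dir, x)

def decode(code):
    # Reconstruct the jammer from its code.
    return code * jammer_dir

x, true_signal = make_input()
residual = x - decode(encode(x))                # jammer nulled
print("residual error:", np.linalg.norm(residual - true_signal))
```

In this sketch the only part of the signal that is lost is its component along the jammer subspace, which is small by construction; in the paper the trained SVQ plays the role of discovering that subspace from example input vectors rather than being given it.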
Similar sources
On the Structure of Real-Time Encoders and Decoders in a Multi-Terminal Communication System
A real-time communication system with two encoders communicating with a single receiver over separate noisy channels is considered. The two encoders make distinct partial observations of a Markov source. Each encoder must encode its observations into a sequence of discrete symbols. The symbols are transmitted over noisy channels to a finite memory receiver that attempts to reconstruct some func...
Optimal and near-optimal encoders for short and moderate-length tailbiting trellises
The results of an extensive search for short and moderate-length polynomial convolutional encoders for time-invariant tail-biting representations of block codes at rates R = 1/4, 1/3, 1/2, and 2/3 are reported. The tail-biting representations found are typically as good as the best known block codes.
Learning invariant features through local space contraction
We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jac...
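For a one-layer encoder, a penalty of this kind can be written in closed form. A minimal sketch, assuming a sigmoid encoder h = sigmoid(Wx + b); the function names and setup are illustrative assumptions, not code from the cited paper.

```python
# Sketch of a contractive penalty: squared Frobenius norm of the Jacobian
# dh/dx of a one-layer sigmoid encoder (an assumed, illustrative model).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    h = sigmoid(W @ x + b)
    dh = h * (1.0 - h)                    # elementwise sigmoid derivative
    # Jacobian J = diag(dh) @ W, so ||J||_F^2 = sum_j dh_j^2 * ||W_j||^2
    return np.sum((dh ** 2) * np.sum(W ** 2, axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
b = rng.normal(size=4)
x = rng.normal(size=8)
print(contractive_penalty(W, b, x))       # added to the reconstruction cost
```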
Learning Discrete Representations via Information Maximizing Self-Augmented Training
Our method is related to denoising auto-encoders (Vincent et al., 2008). Auto-encoders maximize a lower bound of mutual information (Cover & Thomas, 2012) between inputs and their hidden representations (Vincent et al., 2008), while the denoising mechanism regularizes the auto-encoders to be locally invariant. However, such a regularization does not necessarily impose the invariance on the hidd...
Journal: CoRR
Volume: cs.NE/0408050
Year of publication: 2004